Abstract:
The rapid growth of wireless data has overburdened cellular networks (3G/LTE/4G), degrading the user's quality of experience. In a mobile edge computing (MEC) architecture, cooperatively caching content at the base stations is a prudent solution: it reduces user-perceived latency by bringing content closer to the user and minimizes the burden on the backhaul. However, to ensure quality of service in delay-sensitive, time-critical applications, requested content must be served within its deadline. Maximizing storage utilization while maximizing the saved delay in the mobile edge network is therefore a critical problem. To address these challenges, this paper formulates cache placement in mobile edge networks as the problem of placing contents at different MEC-enabled base stations to maximize the saved delay under capacity and deadline constraints. The problem is modeled as an integer linear program, and a relaxation-and-rounding method is presented to solve it. Further, we propose a fuzzy-logic-based caching algorithm that considers deadline, benefit, and predicted content requests in caching decisions; an echo state network is used to predict the content request distribution. Extensive simulations on the MovieLens dataset show that the proposed fuzzy caching scheme significantly improves acceleration ratio, hit ratio, and the number of files meeting their deadlines compared with three existing caching techniques.
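The relaxation-and-rounding approach mentioned in this abstract can be illustrated with a toy sketch, assuming a single cache and treating the fractional relaxation as a density-ordered (fractional-knapsack-style) placement; the content tuples and the round-down rule below are illustrative assumptions, not the paper's exact formulation:

```python
def relax_and_round(contents, capacity):
    """Toy relaxation-and-rounding for cache placement.
    contents: list of (name, size, saved_delay) tuples.
    Relaxation: allow fractional placement (sort by saved delay per
    unit size and fill the cache greedily, fractional-knapsack style).
    Rounding: drop fractional variables (round down) so the capacity
    constraint stays satisfied."""
    order = sorted(contents, key=lambda c: c[2] / c[1], reverse=True)
    placement, used = {}, 0
    for name, size, delay in order:
        if used + size <= capacity:
            placement[name] = 1.0                       # fully cached
            used += size
        else:
            placement[name] = (capacity - used) / size  # fractional part
            used = capacity
    return [name for name, x in placement.items() if x >= 1.0]
```

For example, with contents `[("a", 2, 10), ("b", 3, 9), ("c", 4, 4)]` and capacity 5, the rounded placement caches `a` and `b`.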
Abstract:
Emerging mobile edge networks with content caching capability allow end users to receive information directly from adjacent edge servers instead of a centralized data warehouse, so network transmission delay and system throughput can be improved significantly. Since duplicate content transmissions between the edge network and the remote cloud are reduced, an appropriate caching strategy can also greatly improve the energy efficiency of mobile edge networks. This paper focuses on improving network energy efficiency and proposes an intelligent caching strategy, based on a cached-content distribution model for mobile edge networks, that uses a promising deep reinforcement learning algorithm. A deep neural network (DNN) and the Q-learning algorithm are combined into a deep reinforcement learning framework named the deep-Q neural network (DQN), in which the DNN approximates the action-state value function of the Q-learning solution. The parameter iteration strategy in the proposed DQN algorithm is improved through stochastic gradient descent, so the algorithm converges quickly to the optimal solution and the performance of the content caching policy can be optimized. Simulation results show that, given sufficient training steps, the proposed DQN-based content caching strategy significantly improves the energy efficiency of mobile edge networks.
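A minimal sketch of the tabular Q-learning update that the DQN approximates; the state/action encoding and the hyperparameters here are assumptions for illustration (in the paper, a DNN trained by stochastic gradient descent replaces the table lookup):

```python
def q_update(Q, state, action, reward, next_state,
             actions=(0, 1), alpha=0.1, gamma=0.9):
    """One tabular Q-learning step on a dict-backed Q table.
    Actions 0/1 could mean "do not cache"/"cache" a requested content.
    In a DQN, the table lookup Q[(s, a)] is replaced by a deep network
    approximating the action-state value function, trained by SGD on
    the same temporal-difference target."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q[(state, action)]
```

Starting from an empty table, one step with reward 5.0 moves the estimate a fraction `alpha` of the way toward the target.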
Abstract:
With the explosion of data volume, it becomes challenging to deliver high-quality service to mobile users. Edge caching has therefore received significant attention, since it brings content near mobile users, boosts spectral efficiency, and reduces the backhaul load of mobile networks. Because storage resources within mobile networks are limited, it is important to improve the efficiency of content management in edge caching. In this article, we propose an integrated content-centric mobile network framework for edge caching in 5G networks, which leverages content-centric networking (CCN) to achieve content-oriented information management and increase content delivery efficiency. We design the cache-enabled mobile network architecture, CCN-based function entities, CCN-embedded protocol stack, and content retrieval process, and develop several effective approaches for tackling practical implementation constraints of CCN-based edge caching. We demonstrate that our content caching strategies can significantly enhance edge caching performance, and we identify promising open research directions for further improving content-centric mobile edge caching.
Abstract:
Edge caching can greatly relieve the burden on the backbone network and reduce the content request latency experienced by end-user devices, making it a promising technology for data-intensive and latency-sensitive applications on the eve of large-scale commercial 5G operation. However, the slow-start phenomenon incurred by existing request-history-based caching strategies limits the performance of wireless edge caching, especially in dynamic scenarios where both mobile devices and contents arrive and leave periodically. Deep reinforcement learning-based methods also struggle to adapt to such environment dynamics. Against this backdrop, a new caching algorithm, Similarity-Aware Popularity-based Caching (SAPoC), is presented in this paper to improve the performance of wireless edge caching in dynamic scenarios by exploiting the similarity among contents. In SAPoC, a content's popularity is determined not only by its request history but also by its similarity with existing popular contents, enabling a quick start for newly arrived contents. A series of simulation experiments shows that SAPoC outperforms several typical proposals in both cache hit ratio and energy consumption.
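The quick-start idea behind SAPoC can be sketched as a popularity score that blends a content's own request count with the similarity-weighted popularity of existing contents; the blending weight `beta` and the weighting form below are illustrative assumptions, not the paper's exact formula:

```python
def blended_popularity(req_count, neighbor_pops, sims, beta=0.5):
    """Popularity score = (1 - beta) * own request count
    + beta * similarity-weighted popularity of similar contents.
    The second term gives a newly arrived content a quick start
    before it accumulates its own request history."""
    if not neighbor_pops:
        return float(req_count)
    sim_term = sum(s * p for s, p in zip(sims, neighbor_pops)) / sum(sims)
    return (1 - beta) * req_count + beta * sim_term
```

A brand-new content with zero requests but high similarity to a popular content thus starts with a nonzero score instead of waiting at the back of the queue.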
Abstract:
This two-part paper investigates cache replacement schemes with the objective of developing a general model that unifies the analysis of various replacement schemes and illustrates their features. To this end, we study the dynamic process of caching in the vector space and introduce the concept of the state transition field (STF) to model and characterize replacement schemes. In this first part, we consider time-invariant content popularity based on the independent reference model (IRM). In this case, we demonstrate that the resulting STFs are static and that each replacement scheme leads to a unique STF. The STF determines the expected trace of the dynamic change in the cache state distribution, as a result of content requests and replacements, from any initial point. Moreover, given the replacement scheme, the STF is determined solely by the content popularity. Using four example schemes, including random replacement (RR) and least recently used (LRU), we show that the STF can be used to analyze replacement schemes: finding their steady states, highlighting their differences, and revealing insights regarding the impact of knowledge of content popularity. These results show the STF to be useful for characterizing and illustrating replacement schemes. Extensive numerical results compare analytical STFs with STFs obtained from simulations for the considered example schemes.
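The IRM setting in this abstract (each request drawn i.i.d. from a fixed popularity distribution) is straightforward to simulate; the following sketch estimates the empirical steady-state hit ratio of LRU, one of the example schemes, under an assumed popularity vector (the simulation, not the STF analysis itself):

```python
import random
from collections import OrderedDict

def lru_hit_ratio(popularity, cache_size, steps=50000, seed=0):
    """Empirical steady-state hit ratio of LRU under the independent
    reference model: each request is drawn i.i.d. from `popularity`."""
    rng = random.Random(seed)
    items = list(range(len(popularity)))
    cache, hits = OrderedDict(), 0
    for _ in range(steps):
        x = rng.choices(items, weights=popularity)[0]
        if x in cache:
            hits += 1
            cache.move_to_end(x)           # refresh recency
        else:
            if len(cache) >= cache_size:
                cache.popitem(last=False)  # evict least recently used
            cache[x] = True
    return hits / steps
```

With a skewed popularity vector and a small cache, the estimate converges toward the steady-state hit ratio that the STF analysis predicts analytically.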
Abstract:
Mobile edge computing (MEC) is an important research topic in wireless communication and mobile computing, as it can effectively decrease latency and energy consumption by trading off communication against computing: intensive computing tasks can be offloaded to computational access points (CAPs), especially when the wireless transmission channel is in good condition. This article studies how to intelligently allocate computing capability and wireless bandwidth among users in a cache-aided multi-terminal, multi-CAP MEC network with non-ideal channel estimation, where there are N mobile terminals and M CAPs. Each terminal has tasks that need to be computed quickly and efficiently. For such a system, we first design the system by jointly considering computing capability and wireless bandwidth allocation, using the computing and communication delay as the performance metric. To optimize system performance, we then employ the deep deterministic policy gradient to learn an effective strategy for allocating computing capability and wireless bandwidth so as to minimize the system delay. Simulations confirm the superiority of the proposed scheme, especially the advantages brought by caching.
Abstract:
Edge caching is a promising strategy to reduce the traffic load in mobile core networks by caching popular contents at the edge of mobile networks, e.g., at base stations (BSs). For the BSs to transmit data to user equipment (UE), coded multicast can be used to improve transmission efficiency. Moreover, in future densely deployed networks the coverage areas of adjacent BSs overlap, so these cache-aided BSs can collaboratively carry out coded multicast to improve data transmission performance. In this paper, we consider the Data Placement and Transmission Scheduling (DPTS) problem for cache-aided coded multicast in mobile edge networks. Our objective is to minimize the total cost of data download from the data source to the BSs and of data transmission from the BSs to the UEs. We prove the DPTS problem NP-hard and first propose an Iterative Relaxation Linear Programming (IRLP) algorithm to solve it. Since the complexity of the IRLP algorithm is high, we also propose two low-complexity algorithms for the DPTS problem. Simulation results show that the proposed algorithms achieve a substantial reduction in the total cost of data download and transmission compared with existing methods.
Abstract:
Mobile edge caching can deliver contents directly without the backhaul link, effectively alleviating the spectrum scarcity caused by huge mobile data traffic. In this paper, unlike existing user-centric clustering algorithms, a distributed caching algorithm is proposed based on content providers (CPs), which forms a CP cluster as large as possible to satisfy user equipment (UE) requirements. The cache capacity within the cluster formed by this algorithm is used collectively to provide higher content hit probability and diversity. Furthermore, considering the impact of social interests on caching performance, a closed-form expression for the network hit ratio of the entire cache is derived on the basis of stochastic geometry. A network hit ratio maximization problem is then formulated and solved. Simulation results show that the proposed strategy offers better data offloading performance than other cooperative caching strategies. (C) 2022 Elsevier Inc. All rights reserved.
Abstract:
High-speed internet has boosted clients' traffic demands. Content caching on mobile edge computing (MEC) servers reduces traffic and latency, but caching with MEC faces difficulties such as user mobility, limited storage, varying user preferences, and rising video streaming demand. Current content caching techniques consider user mobility and content popularity to improve the user experience; however, no existing solution jointly addresses user preferences and mobility, both of which affect caching decisions. We propose mobility- and user-preference-aware caching for MEC. Using time-series analysis, the proposed system finds mobility patterns and groups nearby trajectories. Using cosine similarity and collaborative filtering (CF), we predict and cache user-requested content; CF predicts the popularity of group-based content to improve the cache hit ratio and reduce delay compared with baseline techniques.
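The cosine-similarity/CF prediction step can be sketched as a plain user-based collaborative filter; the rating vectors and the similarity-weighted average below are illustrative assumptions, not the paper's exact model:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predict_score(target, neighbors, item):
    """User-based CF: predict the target user's interest in `item`
    as the similarity-weighted average of the neighbors' ratings.
    `target` leaves `item` unrated (0); `neighbors` are full vectors."""
    sims = [cosine_similarity(target, n) for n in neighbors]
    den = sum(sims)
    if den == 0:
        return 0.0
    return sum(s * n[item] for s, n in zip(sims, neighbors)) / den
```

Contents whose predicted score for a trajectory group exceeds a threshold would then be candidates for proactive caching at that group's edge server.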
Abstract:
To handle the vast multimedia traffic volume and the user quality-of-experience requirements of the next-generation mobile communication system (5G), it is imperative to develop an efficient content caching strategy at mobile network edges, deemed a key technique for 5G. Recent advances in edge/cloud computing and machine learning facilitate efficient content caching for 5G, where mobile edge computing can be exploited to reduce service latency by equipping the edge network with computation and storage capacity. In this paper, we propose a proactive caching mechanism named learning-based cooperative caching (LECC), built on a mobile edge computing architecture, to reduce transmission cost while improving user quality of experience for future mobile networks. In LECC, we exploit a transfer learning-based approach for estimating content popularity and then formulate a proactive caching optimization model. As the optimization problem is NP-hard, we resort to a greedy algorithm for the cache content placement problem. Performance evaluation reveals that LECC markedly improves the content cache hit rate and decreases content delivery latency and transmission cost compared with known existing caching strategies.
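Since the abstract only states that a greedy algorithm solves the NP-hard placement problem, the following is a generic greedy sketch under assumed inputs (a size map and a marginal-gain function), not the paper's exact algorithm:

```python
def greedy_placement(sizes, capacity, gain):
    """Greedy sketch for NP-hard cache placement: repeatedly cache
    the content with the best marginal gain per unit size, stopping
    when the next-best content no longer fits or stops paying off.
    sizes: dict name -> size; gain(name, cached) -> marginal benefit
    of adding `name` given the already-cached list."""
    cached, used = [], 0
    remaining = dict(sizes)
    while remaining:
        best = max(remaining, key=lambda n: gain(n, cached) / remaining[n])
        if gain(best, cached) <= 0 or used + remaining[best] > capacity:
            break
        cached.append(best)
        used += remaining.pop(best)
    return cached
```

In LECC the gain function would come from the transfer-learning popularity estimates; here any benefit function, e.g. estimated popularity, can be plugged in.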